The Many Facets of System Partitioning
By Ron Wilson
Integrated System Design
Posted 07/13/01, 09:15:43 AM EDT
Partitioning. It's a word we associate with that computer architecture class we really wanted
to take back in, well, never mind. Mostly, the first
thing that comes to mind is architectural partitioning -- breaking an architecture up into functionally
independent pieces to permit different designers to
work on different blocks at the same time, while
strictly defining the interfaces and minimizing the
bandwidth between blocks.
The problem is that at the architectural level,
partitioning makes sense. We are grouping functions together because they work together. All too
often at lower levels, we are fracturing the design
not into pieces that make functional sense, but
merely into pieces that the design tools can handle.
Each of those partitionings creates its own unique
view of the design, so that moving between them --
as must be done during verification -- becomes a
treacherous journey.
In this issue of ISD we examine SoC partitioning
from this point of view -- the disparity between the
functional blocks of the architecture, the manageable blocks of the synthesis tools and the full-chip
view of extraction tools.
We look at three approaches to making the design converge, from three different design teams. In
our cover story, a chip design group at Agilent Technologies describes its SoC methodology. The team
uses up to seven layers of hierarchy, starting with
150,000-gate blocks acceptable to its physical-synthesis tool, and gradually builds up by routing
blocks together. By keeping detailed routing inside
small blocks, and by carefully preplanning
interblock routing with floor-planning tools, Agilent
achieves timing convergence.
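As a toy illustration of that bottom-up style of hierarchy (a sketch only, not Agilent's actual flow), consider grouping synthesis-sized leaf blocks into parent blocks level by level until a single top-level block remains. The 150,000-gate leaf budget comes from the article; the grouping fan-in and the greedy heuristic are assumptions made for the example.

```python
# Toy sketch of bottom-up hierarchical partitioning (not Agilent's actual
# methodology): leaf blocks sized for the synthesis tool are greedily
# grouped into parent blocks, level by level, until one top block remains.
# The 150,000-gate leaf budget is from the article; GROUP_FANIN and the
# grouping heuristic are assumptions for illustration.

LEAF_BUDGET = 150_000   # gates one physical-synthesis run can handle
GROUP_FANIN = 4         # assumed number of child blocks per parent block

def build_hierarchy(leaf_gate_counts):
    """Group blocks level by level and return the per-level block counts."""
    levels = [list(leaf_gate_counts)]
    while len(levels[-1]) > 1:
        current = levels[-1]
        parents = [sum(current[i:i + GROUP_FANIN])
                   for i in range(0, len(current), GROUP_FANIN)]
        levels.append(parents)
    return [len(level) for level in levels]

# A hypothetical SoC split into 70 synthesis-sized leaves:
leaves = [LEAF_BUDGET] * 70
print(build_hierarchy(leaves))   # [70, 18, 5, 2, 1] -- five levels, counting the leaves
```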
An alternative is explored in two other papers, a full paper from ReShape Design and a short
working paper from Toshiba spin-off ArTile Microsystems. Both of those techniques also start out
with small, synthesizable blocks for timing closure. But those methodologies use abutted-block
techniques to close timing on the interblock routing. Both, incidentally, rely on proprietary tools.
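The appeal of abutment is easy to see in miniature. In the sketch below (an illustration only; the ReShape and ArTile flows rely on proprietary tools, and the data structures here are invented for the example), two blocks share an edge, and an interblock net closes trivially when its ports line up across that edge, so the top-level "route" all but disappears.

```python
# Toy check of the pin-alignment idea behind abutted-block assembly.
# This is not ReShape's or ArTile's methodology; the Port structure,
# tolerance, and coordinates are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Port:
    net: str          # name of the interblock net
    offset_um: float  # position along the shared block edge, in microns

def abutment_mismatches(left_edge, right_edge, tolerance_um=1.0):
    """Return nets whose ports do not line up across the shared edge."""
    right = {p.net: p.offset_um for p in right_edge}
    return [p.net for p in left_edge
            if abs(p.offset_um - right.get(p.net, float("inf"))) > tolerance_um]

# Hypothetical ports on the facing edges of two abutted blocks:
block_a = [Port("data0", 120.0), Port("data1", 240.0)]
block_b = [Port("data0", 120.4), Port("data1", 310.0)]
print(abutment_mismatches(block_a, block_b))   # ['data1'] needs a pin move
```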
A quick note in our “Behind the News” section
hints at another kind of approach to the problem. Agilent moved from conventional synthesis and
timing estimates based on simple wire-load models
to physical synthesis in order to increase the underlying block size. That had the effect of reducing
the number of blocks in a typical chip from nearly
180 to about 70. Synplicity, in its entry into the
ASIC synthesis arena, claims that by increasing the
speed of a synthesis tool, and hence its capacity,
the synthesis block size can be pushed all the way
up to the place-and-route block size, eliminating
one of those big discontinuities. Of course, there
are a lot of footnotes to the company's claims, including big assumptions about wire-load models.
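To see why wire-load models are such a big assumption, consider the kind of statistical estimate they make: before layout, a net's capacitance is guessed purely from its fanout and the size of the enclosing block. The numbers in the sketch below are invented for illustration, not any vendor's library data, but they show how the same net earns a very different delay estimate as the assumed block grows.

```python
# Minimal sketch of a fanout-based wire-load model. The table values and
# the linear RC delay model are illustrative assumptions only.

# Estimated wire capacitance (pF) per net, indexed by fanout, for two
# hypothetical block sizes. Larger blocks assume longer average wires.
WIRE_LOAD_TABLES = {
    "50k_gates":  {1: 0.010, 2: 0.018, 3: 0.026, 4: 0.035},
    "500k_gates": {1: 0.025, 2: 0.045, 3: 0.070, 4: 0.100},
}

def estimated_net_delay(block_size, fanout, drive_resistance_kohm=2.0):
    """Crude pre-layout delay estimate (ns) from a wire-load table."""
    table = WIRE_LOAD_TABLES[block_size]
    cap_pf = table.get(fanout, max(table.values()))  # clamp large fanouts
    return drive_resistance_kohm * cap_pf            # kOhm * pF -> ns

for size in WIRE_LOAD_TABLES:
    print(size, round(estimated_net_delay(size, fanout=3), 3), "ns")
```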
But here is the basis for an interesting discussion. How do we deal with the disparities between
architectural, synthesis, place-and-route, and extraction views of the design? Is there an approach
to fracturing the architecture that leaves everyone, at every level, working on blocks that are functionally independent, physically meaningful and
yet tractable for the tools? Or will partitioning continue to be one of the unrevealed art forms of the
chip designer's craft?